Section: New Results

Machine Learning

In this section, we report on neuronal adaptive mechanisms that we develop at the frontier between Machine Learning and Computational Neuroscience. Our goal is to adapt Machine Learning models for their integration into a bio-inspired framework. This year we focused on three paradigms of computation.

The first paradigm concerns the manipulation of temporal sequences. To better understand how the brain learns structured sequences, we work on a model of syntax acquisition and Human-Robot Interaction using the Reservoir Computing framework (random recurrent networks) [24], [15], [17] with our collaborators at the University of Hamburg (cf. § 9.3). A syntactic re-analysis system [15], which corrects syntax errors in speech recognition hypotheses, was built in order to enhance vocal Human-Robot Interaction and to extend the previously developed model [40]. Additionally, we evaluated the ability of this latter sentence-parsing model [40] to deal with several languages from different language families: it successfully learns to parse sentences related to home scenarios in fifteen languages originating from Europe and Asia [24]. From a different perspective, in order to overcome word misrecognition at a more basic level, we tested whether the same architecture could process phonemes directly instead of grammatical constructions [17]. Applied to a small corpus, the model achieves similar performance.
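
The following minimal sketch illustrates the Reservoir Computing principle underlying these models: a fixed random recurrent network (the reservoir) projects the input sequence into a high-dimensional state space, and only a linear readout is trained. The layer sizes, leak rate, spectral radius and ridge-regression readout are illustrative assumptions and do not reproduce the published models [40], [24], [17].

    # Minimal echo state network (reservoir computing) sketch; all sizes and
    # hyper-parameters are illustrative, not those of the published models.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_out = 50, 300, 20          # input codes, reservoir units, output labels

    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # rescale to spectral radius 0.9

    def run_reservoir(inputs, leak=0.3):
        """Collect reservoir states for a sequence of input vectors."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    def train_readout(states, targets, ridge=1e-6):
        """Ridge regression from reservoir states to target label codes."""
        S, Y = states, targets
        return np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)

    # Toy example with random data standing in for an encoded sentence.
    inputs = rng.standard_normal((30, n_in))      # 30 time steps
    targets = rng.standard_normal((30, n_out))
    states = run_reservoir(inputs)
    W_out = train_readout(states, targets)
    predictions = states @ W_out                  # readout at each time step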

In an industrial application on the representation of electrical diagrams (cf. § 8.1), we also study how recurrent layered models can be trained to traverse these schematics for prediction and sequence representation tasks [10].
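
As an illustration of this kind of model (and not of the industrial system itself), the sketch below trains a small recurrent network to predict the next element while traversing a linearised sequence of diagram elements; the vocabulary size, layer sizes and PyTorch implementation are assumptions made for the example.

    # Hypothetical sketch: next-element prediction over a linearised diagram.
    import torch
    import torch.nn as nn

    n_elements, emb, hidden = 100, 32, 64     # assumed number of diagram element types

    class DiagramRNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(n_elements, emb)
            self.rnn = nn.LSTM(emb, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_elements)

        def forward(self, seq):               # seq: (batch, time) element indices
            h, _ = self.rnn(self.embed(seq))
            return self.head(h)               # logits for the next element at each step

    model = DiagramRNN()
    seq = torch.randint(0, n_elements, (8, 20))        # toy batch of element sequences
    logits = model(seq[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, n_elements),
                                       seq[:, 1:].reshape(-1))
    loss.backward()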

The second paradigm concerns feature extraction and the use of hierarchical networks, as in deep networks. An industrial application (cf. § 9.2) allows us to revisit these models to make them more easily usable in constrained settings, for example with limited-size corpora, and more interpretable, by introducing a new notion of prototypes and by exploring the capability to learn the network architecture itself (using shortcuts) [11]. In order to push the state of the art, the next step is to consider not only feed-forward but also recurrent architectures, and to this end neural network recurrent weight estimation through backward tuning has been revisited [21].
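
The idea of learning the architecture through shortcuts can be illustrated, in a very simplified form, by layers equipped with a learnable gate that lets training decide whether each layer is used or bypassed. This sketch is only an interpretation for illustration purposes; the gate parametrisation and sizes are assumptions and do not correspond to the models of [11] or [21].

    # Simplified illustration: a learnable gate decides whether each layer
    # transforms its input or is shortcut, so training shapes the effective depth.
    import torch
    import torch.nn as nn

    class GatedLayer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.fc = nn.Linear(dim, dim)
            self.gate = nn.Parameter(torch.zeros(1))   # sigmoid(gate) starts near 0.5

        def forward(self, x):
            g = torch.sigmoid(self.gate)
            return g * torch.relu(self.fc(x)) + (1 - g) * x   # pure shortcut when g -> 0

    net = nn.Sequential(*[GatedLayer(16) for _ in range(6)], nn.Linear(16, 3))
    out = net(torch.randn(4, 16))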

The third paradigm is about spatial computation. We have designed a graphical method, originating from the computer graphics domain, for the arbitrary and intuitive placement of cells over a two-dimensional manifold. Using a bitmap image as input, where the color indicates the identity of the different structures and the alpha channel indicates the local cell density, this method guarantees a discrete distribution of cell positions respecting the local density function. The method scales to any number of cells, allows several different structures with arbitrary shapes to be specified at once, and provides a scalable and versatile alternative to the more classical assumption of a uniform, non-spatial distribution. This preliminary work will be used in the design of a new class of models where explicit topography allows structures to be connected according to known pathways.
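
The sketch below illustrates only the input/output convention of this method: an RGBA bitmap whose colour encodes structure identity and whose alpha channel encodes local cell density, from which discrete cell positions are drawn. Plain rejection sampling is used here for simplicity; unlike the actual method, it does not guarantee the local density function, and all names and sizes are illustrative.

    # Illustrative placement of cells from an RGBA bitmap (rejection sampling).
    import numpy as np

    def place_cells(rgba, n_cells, rng=np.random.default_rng(0)):
        """rgba: (H, W, 4) array in [0, 1]; returns cell positions and structure colours."""
        h, w, _ = rgba.shape
        density = rgba[..., 3]                           # alpha channel = local cell density
        positions, colours = [], []
        while len(positions) < n_cells:
            y, x = rng.integers(0, h), rng.integers(0, w)
            if rng.random() < density[y, x]:             # accept with probability = local density
                positions.append((x + rng.random(), y + rng.random()))
                colours.append(rgba[y, x, :3])           # colour = structure identity of this cell
        return np.array(positions), np.array(colours)

    # Toy example: a 100x100 image with two structures of different densities.
    img = np.zeros((100, 100, 4))
    img[:, :50] = (1.0, 0.0, 0.0, 0.2)    # red structure, low density
    img[:, 50:] = (0.0, 0.0, 1.0, 0.8)    # blue structure, high density
    pos, ids = place_cells(img, 500)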